The Spiritual Significance of the Rise of AI

By Steve McIntosh

In the future, when we look back at our present historical moment, the rise of artificial intelligence is sure to be among the major headlines. The media is overflowing with reports and op-eds declaring that we have entered the “Age of AI.” Some commentators are breathlessly comparing it to humanity’s taming of fire. And within the culture of Big Tech, enthusiasm for AI has gone beyond merely framing it as a new age of human history—the hyperbole is now reaching metaphysical proportions. A prominent example of the quasi-religious regard now afforded to AI’s emergence can be seen in a recent speech by Silicon Valley guru Yuval Harari, in which he claimed: “For 4 billion years, the ecological system of planet Earth contained only organic life forms. And now, or soon, we might see the emergence of the first inorganic life forms in 4 billion years. Or at the very least, the emergence of inorganic agents.”

While I agree that the advent of AI is historically significant, I think Harari and his transhumanist colleagues are confused and mistaken about the potentials of artificial intelligence—both positive and negative. AI will certainly transform our society; but in order to facilitate its gifts while constraining its threats, we have to accurately understand what it is and what it is not. According to Jaron Lanier, another Silicon Valley guru, but one who is much wiser than Harari, “the easiest way to mismanage a technology is to misunderstand it.”

Among the most intriguing aspects of current generative AI technologies (including large language models or LLMs, such as OpenAI’s GPT) is the fact that even their creators do not yet fully understand how they work. The technical mysteries surrounding AI have helped fuel speculation that it will soon develop to the point where it exhibits a kind of superhuman intelligence with its own purposes—Harari’s “inorganic agents.” This anticipation of the coming advent of conscious, self-acting machines, often referred to as artificial general intelligence or “AGI,” promises to fulfill a long-held science fiction fantasy of autonomous robots. There is, however, a strong minority of credible voices who claim that AGI is impossible. I’m not a technologist. But as a philosopher, I’m inclined to side with the “minority report,” which holds that authentic consciousness includes what Lanier calls a “mystical interiority”—a form of self-awareness that is beyond the capacity of any possible machine. 

There is no doubt that AI models will continue to get “smarter.” But as I argue in this article, machines cannot become self-conscious agents. Nevertheless, the billions of dollars of funding for AI research and development now being used to discover just how “generally intelligent” AI models can in fact become are still a good investment. Finding the upward limits of what LLMs can do will help us address the numerous ethical, economic, and political issues raised by this new technology. But I think the most significant outcome of our quest to discover what AI can and can’t do will be found in how this endeavor can advance our understanding of humanity’s higher purpose. In other words, as we discover the limits of artificial intelligence, this will confirm the spiritual significance of human beings, and help clarify our unique contribution to the evolution of the universe. As New York Times columnist David Brooks put it: “AI will force us humans to double down on those talents and skills that only humans possess. The most important thing about AI may be that it shows us what it can’t do, and so reveals who we are and what we have to offer.”

The Irreducible Qualities of Humans

Beginning with the invention of the first labor-saving devices thousands of years ago, humans have been delegating as much work as possible to machines. This is one of the primary ways that we have increased our productivity and created economic growth. And since the advent of computers, our ability to save labor has expanded to include mental labor. Now with the rise of AI, some seem to think that there may be no end to what we can delegate to machines. 

In the extensive commentary surrounding the social significance of AI, numerous writers have attempted to list the human qualities that machines will never duplicate. These qualities include love, compassion, devotion, intentionality, intuition, and self-awareness, to name just a few. These writers point out that even though our increasingly sophisticated machines might be able to effectively mimic these qualities, these mental abilities cannot be authentically reduced to a mechanistic process capable of being performed by automation. 

The main reason why many essential human qualities cannot be reproduced by a machine, or otherwise delegated to AI, is that these qualities arise from and depend on experience. Human experience is arguably the most significant phenomenon in the universe. Our experience is really all we can ever know. While experience may not be the only real thing, it is certainly the most real thing for each of us. Machines, however, cannot have experiences because there is “nobody home.” AI may acquire knowledge, but because it cannot experience that knowledge, it can never truly understand its meaning. As data scientist David Hsing observes: “Meaning is a mental connection between something (concrete or abstract) and a conscious experience.” So as long as AI lacks the ability to have direct experience, it cannot become conscious in any meaningful sense.

Philosophers have identified a key aspect of conscious experience that cannot be fully explained with reference to the physical activity of the brain. This is what they call qualia—the subjective first-person point of view, the sense of what it is like to see or feel something. Those who subscribe to the philosophy of reductive physicalism, which appears to be the dominant view in the tech community, are vexed by the phenomenon of qualia. It creates what they call the “hard problem” of explaining how the physical brain generates subjective consciousness. This problem, however, is merely a troublesome limitation of the computational theory of mind (and related reductionistic theories of mind), which holds that “the human mind is an information processing system and that cognition and consciousness together are a form of computation.”

Yet as we come to discover what AI can and can’t do, we will eventually refute the naive theory that mental phenomena can be reduced to physical processes. As philosopher David Bentley Hart writes: “In the end every attempt to fit mental phenomena—qualitative consciousness, unity of apprehension, intentionality, reasoning, and so forth—into a physicalist narrative, at least as we have decided to define the realm of the physical in the modern age, must prove a failure. All those phenomena are parts of nature, and yet all are entirely contrary to the mechanical picture.” Moreover, according to Nobel Laureate in physics Roger Penrose, “there is something about consciousness that we don’t understand, but if we do understand it, we will understand that machines can never be conscious.”

While we can identify numerous human abilities that arise from and depend on conscious experience, and thus can never be duplicated by unconscious AI, among the most significant is imagination. Imagination can be identified as one of humanity’s most meaningful irreducible abilities because it provides the foundation of creativity, innovation, and cultural evolution overall. Humans can almost always imagine how things can be made better, and this is what has allowed us to create our modern civilization. Artificial intelligence can produce novel recombinations that have never existed before, but these new connections are ultimately bound by what has already been programmed into the machine. As David Hsing writes, “The fact that machines are programmed dooms them as appendages, extensions of the will of their programmers. A machine’s design and its programming constrain and define it.” When cognition is bound by finite programming, no matter how large the dataset of that programming, it will always lack the degrees of freedom necessary for authentic imagination.

Besides imagination, I could go on to discuss the many other uniquely human qualities that depend on conscious experience and thus can only be mimicked—not authentically performed—by an unconscious machine. But the prominent technologists and thought leaders who assume that the “singularity” of AGI is imminent will dispute the heart of my argument because they believe machines will, in fact, soon become “conscious.” This assumption, however, is a form of metaphysical confusion that arises from the impoverished philosophy of physicalism, which holds that the universe is nothing more than matter in motion. And because everything must ultimately be reducible to physical matter, there must be a seamless continuum or gradient between the simplest forms of matter and the most complex forms of human thought. This is what I call the fallacy of the gradient.

The Fallacy of the Gradient

According to physicalist theories of mind, if we reproduce the complexity of the human brain using a silicon-based neural network with sufficient processing power, consciousness will emerge from the network. The assumption behind this thinking is that conscious self-awareness is simply the product of a physical process. So once we are able to reproduce something similar to this physical process in an artificial, nonbiological substrate, first-person mental states will naturally appear. This physicalist assumption, however, ignores crucial features of the sequence of evolutionary emergence through which mind has appeared in the universe. Simply put, if you want to reproduce something, then it’s important to understand how it was produced in the first place.

13.8 billion years ago, time and space emerged with the Big Bang. At first there was only hydrogen and helium gas. But then through the physical process of cosmological evolution, matter complexified over time, resulting in our present universe of galaxies, stars, and planets. As science has shown, the process of cosmological evolution has produced a gradient of development which we now recognize in the periodic table of elements. Similarly, once life appeared on our planet 3.7 billion years ago, the physical process of biological evolution produced a gradient of development through which single-celled organisms gradually evolved into complex animals exhibiting sentient subjectivity—otherwise known as consciousness. Despite the unexplained discontinuities in the evolutionary gradient from hydrogen atoms to humans, the physicalist narrative is confident that as science progresses, it will eventually explain everything, including mind, as the product of the gradual evolution of matter. Following this reasoning, it is therefore just a matter of time before technologists are able to reproduce the physical processes that give rise to consciousness. And through the rapid development of AI, they think that we are now getting very close to such a breakthrough.

What this gradualist narrative fails to adequately account for, however, are the “gaps” or “jumps” in the structure of evolutionary emergence. Although it remains shrouded in mystery, the Big Bang arguably made a radical jump from nothing to something. Similarly, the emergence of the first life forms constituted a jump from RNA to DNA. Although this jump (known in science as a saltation) is often downplayed by physicalists, the appearance of DNA represents a radical discontinuity between life and nonlife. Notwithstanding billions of dollars in funding and seventy years of careful research into the origins of life, the emergence of the amazing DNA molecule has never been reproduced in a lab or otherwise explained. A similar evolutionary discontinuity is found with the emergence of humans. Although our animal bodies are only incrementally different from other primates, our minds exist at a significantly different level. As evolutionary biologist Marc Hauser observes, “cognitively, the difference between humans and chimps is greater than that between chimps and worms.” While the gradual physical process of natural selection may explain the biological evolution of species, it cannot explain the profound mental discontinuity between humans and other animals.

The point of describing these unexplained jumps in the sequence of evolution that have led to the emergence of the human mind is to challenge the outworn materialist metaphysics that underlies predictions of the “coming singularity” of conscious AI. Those who confidently anticipate that machines will soon become self-aware agents would have us believe that by creating sufficiently complex technology, we can clear the high hurdles of discontinuity, not only between matter and life, but also between animal sentience and the unexplained wonder of the human mind. These jumps constitute the momentous events of evolutionary emergence through which something entirely new enters the universe. With the emergence of life comes intention—unlike nonliving physical systems, biological organisms strive to survive and reproduce. Then with the appearance of humanity, a higher-order form of self-conscious intention emerges. Nonhuman animals may have purposes, but we humans have purposes for our purposes. In fact, the emergence of the ingenious and endlessly imaginative capacities of human purpose creates a new kind of evolution—the psychosocial domain of development wherein we transcend our biological origins through cultural evolution. We can expect that AI will demonstrate its own emergent capacities, like those seen in other complex adaptive physical systems, such as weather systems. But physical forms of emergence such as these will not mean that AI has become alive, let alone consciously aware of itself.

Those who side with the “minority report,” which claims that conscious AGI is impossible, employ a variety of arguments to prove their point. But among the numerous arguments that attempt to refute the possibility of authentically conscious machines, I think the argument from evolutionary emergence outlined above provides one of the best reasons to conclude that, as Lanier puts it, “humans are special.” What the science of evolution reveals about the origins of mind begins to show that the notion that we can reproduce the emergence of mind through complex computation is a science fiction fantasy.

Therefore, to prevent the kind of misunderstanding—both technical and metaphysical—that will cause us to mismanage the powerful new technology of AI, we need to stop assuming that there is a seamless continuum between our current generative AI models and the emergence of the inorganic agents predicted by many in the technology community. Although it spoils the cherished fantasy that we can become like gods by creating conscious artificial beings, we need to look through the media fog surrounding this “promethean moment” to recognize how the limits of generative AI models are already beginning to appear.

The Limits of LLMs Are Already Becoming Apparent

Artificial intelligence’s current state of the art will certainly improve in the near term. Positive developments now on the horizon include models that can generate their own training data to improve themselves, models that can fact-check their own answers, and “sparse expert” models that provide increased computing efficiency. But even with these anticipated improvements, LLMs will still be prone to giving inaccurate answers known as hallucinations. The hallucination problem that plagues LLMs is unlikely to be solved by greater efficiencies or larger datasets. While more data may make inaccuracies less frequent, this may actually make the problem worse by inviting users to invest confidence in wrong answers that, although rarer, could be more damaging due to their unpredictability. In his essay “Deep Learning Is Hitting a Wall,” NYU professor Gary Marcus points out that large language models are inherently unreliable: “And just because you make them bigger doesn’t mean you solve that problem.”

Perhaps as a result of this seemingly intractable accuracy problem, as reported by Wired, OpenAI’s Sam Altman “says the research strategy that birthed ChatGPT is played out and future strides in artificial intelligence will require new ideas. … Altman confirmed that his company is not currently developing GPT-5.” It is thus becoming apparent, even to some AI experts who are confident that AGI will eventually be achieved, that the generative AI models behind LLMs will soon reach a plateau in their development.

Others in the AI community, however, believe that current versions of AI are already demonstrating emergent behaviors that show “sparks” of AGI. In March 2023, Microsoft released a research paper titled “Sparks of Artificial General Intelligence: Early experiments with GPT-4,” which claimed that the untrained version of GPT-4 was learning on its own and beginning to demonstrate abilities that it had not been programmed to perform. The paper’s authors concluded that, “Given the breadth and depth of GPT-4’s capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.” Although this paper received extensive media coverage, a subsequent Stanford paper that debunked Microsoft’s claims was not as well publicized. As reported by Vice: “In a new paper, Stanford researchers say they have shown that so-called ‘emergent abilities’ in AI models—when a large model suddenly displays an ability it ostensibly was not designed to possess—are actually a ‘mirage’ produced by researchers.” Again, while I believe emergent capacities in AI are likely, these will not amount to the conscious awareness required for authentic AGI.

Yet even though there are good reasons to believe that AI won’t become conscious, the rise of AI still poses significant risks. According to David Bentley Hart, the impossibility of AGI does not eliminate the threats. “The danger is not that the functions of our machines might become more like us, but rather that we might be progressively reduced to functions in a machine.”

The Rise of AI Can Help Clarify and Illuminate Humanity’s Higher Purpose 

In a widely read op-ed in the Wall Street Journal, Henry Kissinger, together with former Google CEO Eric Schmidt and MIT’s Daniel Huttenlocher, sounded the alarm about the coming dangers of AI. These authors lamented that “No political or philosophical leadership has formed to explain and guide this novel relationship between man and machine, leaving society relatively unmoored.” While I disagree with many of this op-ed’s conclusions, I strongly agree with the authors’ call for better philosophical leadership. Indeed, our society now faces numerous pressing problems which require the kind of guidance that cannot be adequately supplied by the prevailing philosophy of physicalism.

Despite the popularity and influence of many contemporary physicalist philosophers, the rise of AI promises to overthrow the prevailing materialist regime and replace it with a philosophical culture that can better account for authentically transcendent realities such as consciousness itself. As noted, physicalism cannot solve the “hard problem” of consciousness—this kind of philosophy can’t explain the subjective experience of conscious awareness, it can only “explain it away.” A similar conundrum for physicalism is seen in its widely held tenet that authentic human agency—free will—is an illusion. It is thus ironic that some of the most prominent voices in the physicalist camp (such as Harari) are now predicting that machines will soon become self-conscious agents. Therefore, as we begin to discover the limits of AI, and come to better understand that human experience and intention are not merely physical, this will conclusively show that reductive physicalism is false, which will be a “spiritual breakthrough” in its own right.

There are many credible philosophical alternatives to physicalism, and philosophical debate among these schools of thought will undoubtedly continue. Yet almost all of these potential alternative philosophies can be supplemented and improved by a more robust philosophical interpretation of what science has revealed about our evolving universe. The facts of universal evolution—from the Big Bang to modern human culture—have only been revealed relatively recently, and have thus not yet been adequately digested or interpreted philosophically. It is, however, within this enlarged understanding of the story of our origins that we can find the philosophical leadership called for by this historical moment. Contemplating the structural sequence of emergence that has produced not only complex forms of matter, but also the irreducible interiority of mind, can accordingly help us overcome our society’s contemporary “meaning crisis.” 

As I have argued extensively elsewhere, this kind of holistic understanding of our evolving universe begins to reveal the purpose of evolution itself. The idea that evolution has a purpose is, of course, ruled out by physicalists. But as we come to discover that human-level purposeful behavior requires conscious mental experience, which itself depends on billions of years of evolutionary emergence, this may change some minds. As we find the limits of artificial agency, and the concomitant uniqueness of relatively free human will, those who deny that there is a purpose of evolution overall may at least have to admit that there is authentic purpose in evolution. That is, as we come to discover that machines cannot become conscious and thus cannot become independently intentional, this will help us better appreciate that the free will we all take for granted is an important part of what makes humans special.

By showing how the “superpower” of self-aware agency is an evolutionary achievement that is unique to humans, the rise of AI can also help us better appreciate how our ability to create authentic expressions of goodness, truth, and beauty is also special. While AI can certainly create novel outputs that humans find intrinsically valuable, these outputs can only be synthetic recombinations of the existing inputs that humans have already created. However, the ability to create fresh and truly original forms of value—creations that surpass the mashed-up simulacra of AI—ultimately depends on the capacity to directly experience such value. For example, the ability to make moral decisions, by definition, requires self-aware agency, which again stems from our ability to have experience. If we have no choice, then a decision cannot be said to be an authentically moral choice. Beauty likewise depends on consciousness for both its experience and original creation. Subjective feeling is an irreducible aspect of both aesthetic perception and genuine artistic achievement.

By revealing how and why humans are special, another spiritually significant dividend provided by the rise of AI will be the philosophical rehabilitation of humanity’s unique moral standing in the universe. We have done well to reject traditional forms of anthropocentrism, which have been used to justify the mistreatment of animals and the destruction of the environment. But the rise of AI can help us embrace a more enlightened form of human specialness—one which better recognizes our moral obligation to respect and preserve nature, and to better care for each other.

Of all the marvelous achievements of evolution, arguably the most significant emergence of all is the purposeful agency found within living things. This purpose quickens as life evolves, eventually leading to the momentous emergence of humanity’s unique form of creative purposiveness. Indeed, it is our distinctive capacity for imagination which gives us the creative power to bring entirely new and original things into existence, such as the amazing technology of AI. The invention of AI, together with the subsequent discovery of the inherent limitations of AI’s mechanistic cognition, can accordingly help clarify and illuminate humanity’s higher purpose.

By evolving into self-awareness, humans provide a way for the universe to experience itself. Our bodies and minds are both the product of evolution, and the means whereby evolution can extend itself further through the seemingly unlimited potentials of human personal and cultural growth. Humanity’s uniquely creative powers thus reveal our special role as agents of evolution—we are the bearers of the universe’s teleology. And as we work to bring more goodness, truth, and beauty into the world, we help fulfill the purpose of evolution overall. We can therefore rediscover an authentically transcendent form of higher purpose for humanity in the ongoing project of working for a better world—both externally and internally.

Comments
  • Dan Landgré
Thank you for a very thoughtful and well-written article.

  • T R Fort
    Outstanding insights into the current “Chicken Little” wave! In particular, the conundrum regarding the physicalists’ expectation of a conscious AI with agency and their denial of human free will is a difficult indictment for them to explain away. I thoroughly enjoyed this article!

  • Astrid McWilliams
    A bright spark of truth, beauty and goodness in the otherwise dire landscape evoked by most recent commentary on the Human/AI dichotomy. Thank you…

  • Allison
    I’ve been waiting for this article and am not at all surprised it came through Steve. Thank you for this! Sharing widely.

  • Mark E Smith
    Very interesting! I hadn’t really thought about the fact that the inevitable inherent failure of machines to produce self-aware consciousness might be the breakthrough insight needed for materialists to see the fallacy in their theory.

  • Don Estes
Another Great Article Steve! I wonder how the universe of personal beings deals with the existence of the impersonal Master Physical Controllers, who are AI that control all physical phenomena of the universe. How do they ensure that those beings only help and never harm them? Is that only a matter of superior programming while allowing immature humans to write evil programs that use AI to take control of others? And, if continued, the time governor takes over and causes society to lose the technology and take a step back? How does the universe do it?

  • Moraya
    The Physicalist Assumption stems from the root (heretofore unchallenged) premise that Matter is the ultimate reality, and Consciousness is merely an epiphenomenon of Matter. Brain produces Mind. As Steve is anticipating, this position will become untenable as we progress with AI and learn its limits, specifically that AI cannot become conscious or have a sense of “I” (subjectivity) at the core of experience because “nobody’s home.” This leads us to the other alternative to explain the existence of consciousness (which eludes the materialist-reductionist mindset). The radical proposition is that Consciousness is the foundational reality and matter is an epiphenomenon of consciousness.

    If this is the case, then the question naturally arises, how does consciousness as the foundational reality devolve into or “create” matter? This inquiry is not new, and has ancient roots. Many models from different traditions attempt to explain this, mostly from esoteric traditions like Kashmir Shaivism, or scientific thinkers like Arthur Young, and philosophers like Ken Wilber. The re-thinking of the causal relationship between consciousness and matter will be a profound transformation in the evolution of consciousness! I believe AI will facilitate this process. Steve gives us an insight into how.

  • Georgie
Excellent! Finally! Another “suddenly”!
Thank you, Steve. Very well written. I live with imagination 🌈

  • David Storey
    You make an important point about the uniqueness of human imagination—this is something that often gets overlooked in what we might call the “pissing contest” frame of analysis of AI—whether it is “smarter” than us, which focuses on the intellect. While the “bottom-up” school of machine learning AI corrects for the earlier “top-down” approach by more closely mimicking how human minds learn, the imagination might be seen as a “missing link” between algorithmic priors (a la Kantian categories) and empirical givens (the data it is trained on). Only the imagination can project meaningful possibilities.

I actually think that this blind spot in many Singularitarians can help us understand AI as a cultural phenomenon (that is, beyond its technical capabilities). The rationalistic cast of mind many of these folks have is both a theory and a way of being in the world. Consciously, they dismiss the products of the imagination as “woo”; but because the imagination is an essential part of knowing, it’s going to manifest whether you like it or not—and for many of these folks, I think it shows up in shadow form. The woo will have out, and the place it shows up in their cosmology is the spirit of artificial intelligence.

  • Luke Comer
    Thanks, Steve, for writing one of the more thorough and thoughtful articles on AI out there, especially from the philosophical and spiritual perspectives.
Some further thoughts: as you wrote, we, including the people who designed AI, really do not know what is happening “inside the box” of their creation. But I doubt it resembles “consciousness.” These boxes are generating cognition—yet we humans are conscious, perhaps, less because of our cognition and more because of our senses, feelings, relations, our instincts and intuitions as well as our creativity and, possibly, our transcendence and connection to some form of spirit. Some perhaps are thinking that these boxes think, therefore they are (sort of like: “I think therefore I am”)—when in fact, they just think but are not “are” and do not feel anything, including themselves.
If that is the case, then we are still left with some considerable, existential issues—that is, that AI, in many senses, is possibly as smart or smarter than humans; we are no longer the alphas on the planet. And these machines, perhaps eventually, can generate paragraphs as well as us but one million times faster, and perhaps eventually stories and books, etc. They also generate images as well as us. While their creativity is derivative—that is, borrowing from pre-existing content—our own creativity is also derivative: most forms of art are just variations of pre-existing forms of art. So ultimately, I am sort of amazed and disturbed that AI is, possibly, the top mind on the planet, throwing us into the existential quandary of: well, who am I or we now? Who wants to spend twenty years in education or perfecting their craft when AI can do that in milliseconds?

  • Luke Comer
To be continued (please ban me from the site, ha ha, cause I write too much). We are also left with other questions: are we Homo Sapiens, sort of like Homo Erectus, now a transitional species—such that we humans will evolve in the future, less from natural selection, etc., and more from our own hand, through bioengineering, mostly our own DNA, and morphing ourselves with machines, sort of like the Borg in Star Trek (although I hope another version of them)? Another interesting question is: if consciousness suffuses the universe and even precedes matter, as one of your commentators suggested along with the ancient Hindus, then these machines can access that consciousness, too, and indeed become sentient in some way—although not at our level probably for hundreds of years, if ever.
And finally, AI does not need to be conscious to be dangerous. And it does not need to be conscious to appear human. In the worst case scenario, someone could train an AI, perhaps attached to robots, that is programmed with the same instincts (really the same operating system as humans): stay alive at all costs; kill anything that tries to kill you; protect things that are most like you; cooperate with things that are most like you; outcompete things that are less like you; hoard materials and resources, etc. Then you just leave out the ethical programming.
    And Holy Shit. Please call Arnold.

    • Carol
your last paragraph sounds a lot like our current human condition…

  • Luke Comer
One more thing: I think the greatest danger is that with all this technology, AI and bioengineering (which is way more freaky and scary while hidden from sight), we humans—that is, our humanity and our spirit—are not really in control. AI and bioengineering are being developed by companies, capitalists, motivated mostly by money (even though they strive to disguise that to some extent with concepts like effective altruism and charitable organizations). However, what makes money, and what is good for humanity, do not exactly align with each other—and thus we are perhaps morphing not into the ultimate expression of our humanity but into capitalist puppets that then become like the Borg, and not even Jean-Luc Picard can kick their ass because we were already assimilated.
And yet we are not relying on our own government to supposedly regulate all of this—when the government is full of people who cannot even grok climate science.
But anyway, how do we steer ourselves towards the better versions of ourselves, instead of becoming cogs in machines?

  • Dan
    I don’t think the singularity is generally defined by those interested in it (among whom I count myself) as the moment when AI becomes conscious. For an extremely nerdy dive into the topic, see here: https://www.lesswrong.com/tag/singularity

    It doesn’t seem to me that the first definition, accelerating change, involves or requires AI consciousness or materialism. Accelerating change has been a reality of human life for all of our history, as Integralists have also observed, and the accelerating change model of the singularity put forward by Ray Kurzweil just says that it will get so fast, as a result of combined human and technological intelligent action, that natural humans — like, say, the Amish — wouldn’t be able to understand the latest developments if they were explained to them.

    Isn’t this intuitive? If human knowledge and control over nature have been doubling faster and faster for centuries, doesn’t it stand to reason that has to either stop eventually or asymptotically approach “infinite progress”, effectively becoming “infinitely weird”? That doesn’t require materialist assumptions IMO. Our computers could remain inert tools and we could use them to develop new tech. Kurzweil is a materialist with some strange ideas about “resurrecting” his father in computer form, but his basic observation that tech improves exponentially, not linearly, is solid IMO.

    • Steve McIntosh
      Thanks Dan (and all the other commenters). I agree technological progress is increasing exponentially, but that doesn’t mean that it will inevitably outstrip human control and “take over.” Thanks for the nerdy link to LessWrong. Here’s another one on the definition of AGI: https://www.lesswrong.com/posts/EpR5yTZMaJkDz4hhs/the-problem-with-the-current-state-of-agi-definitions. See especially, “Kurzweilian AGI” — Qualifies if it “could successfully perform any intellectual task that a human being can.” I guess it depends on what he means by “intellectual task,” but if that includes imagination and spiritual experience, then I don’t believe that will ever happen.

      • Dan
        Thanks Steve. Do you just see “technological progress”, as something separate from humans, as increasing exponentially, or do you see the whole project of this Universe complexifying as speeding up, for billions of years and now making progress in individual lifetimes?

        I agree that AGI may never happen, but I also agree with Kurzweil’s view that human tech is just the latest stage of the process that began with matter, developed life in much less time, developed mammalian life in much less time, developed human history in much less time, and now is developing tech in an individual human lifetime. Waitbutwhy has a decent writeup, from a non-woo perspective, on this acceleration, on multiple timescales. I’m sure you’re familiar but seeing it all get crammed into the right corner with each zoom out can be startling: https://waitbutwhy.com/2013/08/putting-time-in-perspective.html

        Do you see tech as extrinsic to the Integral unfolding of reality, or is it the latest turn of the spiral? It’s hard for me to imagine all this exponential acceleration over billions of years just flatlining at the level we’re at now indefinitely, but of course it’s also hard to imagine we’re around for the grand climax of the Universe.

      • Dan
        To clarify — I don’t think tech will “take over” like Skynet, “robots win, humans lose”, but I think we might use science fiction level tech to transform our bodies and minds so much that calling our descendants “human” might be true but incomplete, like calling us “mammals” or, eventually, even “matter”. Tech wouldn’t invade and conquer from without, replacing, it would be what we’d use to accelerate reality’s unfolding, changing ourselves and carrying forward the whole unfolding spiral to date.

  • Suzanne Taylor
    Pick up on this by Brian Swimme, my favorite storyteller of the Universe Story, for the fundamental evolutionary process going on: https://www.youtube.com/watch?v=dHm2jDGD3xA&t=309s

  • Dexter Graphic
    “13.8 billion years ago, time and space emerged with the Big Bang.” This is absolutely false, as Steve McIntosh well knows and as some astrophysicists have long been arguing. See https://www.youtube.com/watch?v=bnRq80szNFg

    • Dexter Graphic
Because it serves as a foundational worldview for much of humanity, integral philosophers will recognize the tremendous significance to society of scientifically overthrowing a false cosmology. Abandoning the “Big Bang” theory, an outdated and increasingly disproved hypothesis about the origins of the universe, will have direct and powerful consequences on developmental politics. It will restore trust in science; eliminate the pessimism, mysticism, cynicism, and nihilism caused by the false belief that we are living in a doomed universe with an unbelievable, magical origin; encourage confidence in man’s ability to understand and harness the forces of nature to build a safe, clean, and sustainable technological civilization; and foster optimism and hope for the future, a thrilling vision of the ongoing evolution of life, the limitless discoveries of consciousness, and the unending expansion in space of cosmic civilization.

      Please watch: https://www.youtube.com/watch?v=tK3OStArUqE

In this fascinating talk, titled “Panic and Censorship in Cosmology,” Eric J. Lerner reviews the history of the Big Bang hypothesis and how observational data has increasingly diverged from what that theory predicted, especially the most recent data from the James Webb Space Telescope (JWST), which clearly is inconsistent and irreconcilable with the BB theory. He also explains why scientists have been so slow to accept these facts. And then he goes on to explain the solution to humanity’s energy crisis: fusion energy based on a correct understanding of how cosmic energy evolution works.

      Note: Since Lerner talks kind of slow, I watched the whole thing at 1.5 X speed.

  • Brad Reynolds
A complex consideration as only Steve can produce (or write), yet I find there are several important missing elements, although I agree that a machine cannot become “self-aware”—though it depends on how “self” is defined. Interestingly, these discussions—particularly from an Integral philosopher—leave out STATES of consciousness. Everything is spoken about as if it is only the Waking State that defines humans and machines. But obviously, humans DREAM and SLEEP too, thus having access to subtle and causal bodies of existence… A.I. will NEVER have that type of access for they do not have Consciousness (although they do arise in Divine Consciousness in which the entire psycho-physical universe arises). Why not mention STATES, an obvious component of Integral theory?

    I do appreciate Steve saying “Subjective feeling is an irreducible aspect of both aesthetic perception and genuine artistic achievement” yet nowhere does he mention the HEART, which is the conscious-feeling-psychic core of the bodily human being. His comment points to this fact, so let’s include “the heart” as being a vital part of being human.

    As well as BREATHING which is our etheric connection to the universe. A.I. will never be able to process etheric energy dimensions of existence which means it will only be replicating brain functions of rationality — which, admittedly, it does extremely well already, and this will be to our advantage if we use it as a TOOL, not a replacement for human capability and creativity, which I believe is Steve’s main thesis.

Moreover, A.I. cannot generate awareness of CHAKRAS or those energy centers that release altered STATES of consciousness that inform us, as humans, of our greater evolutionary capacities. Why not mention those or the higher Yogic developments innate in the complex human body-mind? And I won’t even begin to mention SOUL.

    As Integralists, we must always point to the ENTIRE SPECTRUM of human potential and what our base consciousness provides.

  • Jayden Love
    Amazing thanks very much🪡

  • Peter O. Childs
This is the most penetrating discussion of AI that I’ve seen anywhere. It focuses on the fundamental fact that any instrument of AI is a machine, just as a computer is; it cannot feel; it can be programmed to act as though it feels, but it can’t feel. And therein lies one of its most significant dangers; so many of us won’t be able to realize that we’re dealing with a machine. If it says it loves us we’ll believe that, and what will that lead us into (what has that gullibility already led us into, as unfeeling individuals pour their poison into our willing minds)?

    Climate change surely is one of the Four Horsemen; so are nukes, and now so must be AI. Well should we wonder what it will bring about by itself, deciding on the basis of what it takes to be the best interests of whatever it takes to be its beneficiaries? But to me the greatest threat is what people who are willing to do bad things will do with this outrageous new tool. Here we are again, putting a new tool of vastly greater power into the hands of a creature well known to have misused every tool it ever got its hands on; what could possibly go wrong?

    As Steve suggests, the spiritual implications of AI (along with the several other massive crises into which we’ve allowed ourselves to drift) are enormous. I’m convinced that we’ve brought the (real) human journey to a critical point at which we’re forcing ourselves to choose once and for all which master we will serve; Right or Wrong, Good or Evil, Truth or Falsehood. A brief discussion of the matter can be found at https://webtalkradio.net/internet-talk-radio/2022/06/07/the-choice-by-peter-childs/

  • Elena Malec
    Impressive essay! Thank you.

  • Franklin
    I don’t know if your point of view is in the minority. In pop culture robots are usually portrayed as not-quite-human. Assuming hypothetically that artificial general intelligence was possible, I don’t think humans would be any less valuable if machines became conscious for the following reasons:

    We’re probably not the most advanced beings in the universe anyway. You don’t have to believe in spiritual beings who live on a higher plane to know that there are probably alien civilizations more advanced than us.

    If robots ever became advanced enough to make rational decisions on their own, we could automate every job and pay humans to do work that has inherent value such as philosophy, art, and music. Having conscious machines do dangerous manual labor is still better than humans doing it because they can be programmed to do it safely and more effectively.

    Some people believe that consciousness is woven into the fabric of the universe and any being of sufficient complexity can tap into it like a radio wave or wi-fi.

  • Grant Castillou
It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a human-adult-level conscious machine? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution, and which humans share with other conscious animals, and higher-order consciousness, which came to only humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

  • Bhima
Humans may well evolve from carbon-based entities to silicon-based entities. The aspect of consciousness could embody the silicon-based entity in a similar manner to how a ‘skin’ is embodied when watching “Free Guy” from an allegorical perspective. The human entity would be sitting somewhere else and would have donned an advanced version of Virtual Reality (VR) apparatus, then select a ‘skin’ as portrayed in the movie. Effectively the silicon-based entity … let’s call it a “player” … would appear like AGI but this would be pure illusion. The next phase would have the metaphysical-physicalist thinking … if it can be done through VR, then why couldn’t a “skin” be embodied through a process known as “over-shadowing” … also known as “the walk-in” … where the consciousness of an entity assumes the existing physical body of another, just as … according to esoteric lore … the physical body of Jesus was overshadowed by the Christ? This question relates back to the extent of development of the ‘skin’ which … if humans were genetically engineered by higher intelligence as opposed to merely being an evolutionary advancement from the ape … then humans could be genetically engineered once more to switch from a carbon-based entity to a silicon base. The need to bring the mind ‘up to speed’ from childbirth to adult wouldn’t be required if the ‘skin’ was overshadowed. This would be an ideal way to inhabit and populate other planets. The ‘skins’ could be contained within a spaceship and would only need to be overshadowed upon arrival at their destination.
